Extended Global Convergence Framework for Unconstrained Optimization
Authors
Abstract
An extension of the global convergence framework for unconstrained derivative-free optimization methods is presented. The extension makes it possible for the framework to include optimization methods with varying cardinality of the ordered direction set. Grid-based search methods are shown to be a special case of the more general extended global convergence framework. Furthermore, the required ...
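To make the class of methods covered concrete, the following is a minimal sketch of a grid-based coordinate search of the kind the framework subsumes. It is an illustration under assumed details (the function name, the fixed set of coordinate directions, and the step-halving rule are not taken from the paper), not the paper's framework itself.

# Illustrative only: a coordinate/grid-based direct search, one member of the
# class of derivative-free methods the framework covers. Names and parameters
# are assumptions, not taken from the paper.
import numpy as np

def grid_search_minimize(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = x.size
    for _ in range(max_iter):
        # Ordered direction set: +e_1, -e_1, ..., +e_n, -e_n. The cardinality is
        # fixed at 2n here; the extended framework allows it to vary per iteration.
        directions = [s * e for e in np.eye(n) for s in (1.0, -1.0)]
        improved = False
        for d in directions:
            trial = x + step * d
            f_trial = f(trial)
            if f_trial < fx:      # accept the first direction giving simple decrease
                x, fx = trial, f_trial
                improved = True
                break
        if not improved:
            step *= 0.5           # no decrease on the current grid: refine the mesh
            if step < tol:
                break
    return x, fx

# Example: minimize a smooth quadratic from the starting point (3, -2).
print(grid_search_minimize(lambda z: (z[0] - 1.0)**2 + 2.0 * (z[1] + 0.5)**2, [3.0, -2.0]))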
Similar resources

Global Convergence of a Memory Gradient Method for Unconstrained Optimization
The memory gradient method is used for unconstrained optimization, especially for large-scale problems. The memory gradient method was first proposed by Miele and Cantrell (1969) and Cragg and Levy (1969). In this paper, we present a new memory gradient method which generates a descent search direction for the objective function at every iteration. We show that our method converges globally...
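As background, here is a minimal sketch of a generic memory gradient iteration, assuming a simple safeguarded direction and an Armijo backtracking line search; the combination weight and the fallback rule are illustrative and are not the specific update proposed in the paper.

# Illustrative only: a generic memory gradient iteration with a descent
# safeguard and Armijo backtracking; beta and the fallback rule are assumptions,
# not the update proposed in the paper.
import numpy as np

def memory_gradient_minimize(f, grad, x0, beta=0.4, c1=1e-4, tol=1e-8, max_iter=500):
    x = np.asarray(x0, dtype=float)
    d_prev = np.zeros_like(x)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g + beta * d_prev              # reuse the "memory" of the previous direction
        if g @ d >= 0.0:                    # safeguard: fall back to steepest descent
            d = -g
        t, fx = 1.0, f(x)                   # backtrack until the Armijo condition holds
        while f(x + t * d) > fx + c1 * t * (g @ d):
            t *= 0.5
        x = x + t * d
        d_prev = d
    return x

# Example: a strictly convex quadratic; the iterates approach the minimizer (0, 0).
f = lambda z: z[0]**2 + 10.0 * z[1]**2
grad = lambda z: np.array([2.0 * z[0], 20.0 * z[1]])
print(memory_gradient_minimize(f, grad, [3.0, -1.5]))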
Semidefinite Approximations for Global Unconstrained Polynomial Optimization
We consider the problem of minimizing a polynomial function on R^n, known to be hard even for degree 4 polynomials. Therefore approximation algorithms are of interest. Lasserre [15] and Parrilo [23] have proposed approximating the minimum of the original problem using a hierarchy of lower bounds obtained via semidefinite programming relaxations. We propose here a method for computing tight upper ...
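For context, the lower-bound formulation behind the Lasserre/Parrilo hierarchy mentioned above can be stated as follows (standard background, not the upper-bound method this paper proposes):

\[
  p^{*} \;=\; \min_{x \in \mathbb{R}^{n}} p(x)
  \;\geq\;
  p_{\mathrm{sos}} \;=\; \max\{\, \gamma \in \mathbb{R} \;:\; p - \gamma \in \Sigma \,\},
\]

where \(\Sigma\) is the cone of sum-of-squares polynomials. The constraint \(p - \gamma \in \Sigma\) holds exactly when \(p(x) - \gamma = m(x)^{\top} Q\, m(x)\) for some matrix \(Q \succeq 0\) and a vector of monomials \(m(x)\), a condition that can be checked by semidefinite programming.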
A Trust-region Method using Extended Nonmonotone Technique for Unconstrained Optimization
In this paper, we present a nonmonotone trust-region algorithm for unconstrained optimization. We first introduce a variant of the nonmonotone strategy proposed by Ahookhosh and Amini [AhA 01] and incorporate it into the trust-region framework to construct a more efficient approach. Our new nonmonotone strategy combines the current function value with the maximum function values in some pri...
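Below is a minimal sketch of how such a nonmonotone reference value (a combination of the current function value and the maximum of a few recent values) can enter a trust-region acceptance test. The weight, memory length, and thresholds are assumptions for illustration, not the exact rule of Ahookhosh and Amini.

# Illustrative only: a nonmonotone reference value replacing f(x_k) in the
# trust-region acceptance ratio. Parameters are assumptions, not the paper's rule.
def nonmonotone_reference(f_history, eta=0.7, memory=5):
    f_k = f_history[-1]
    f_max = max(f_history[-memory:])
    return eta * f_max + (1.0 - eta) * f_k      # R_k = eta * f_max + (1 - eta) * f_k

def acceptance_ratio(R_k, f_trial, predicted_decrease):
    # Actual reduction measured against R_k, divided by the model's predicted decrease.
    return (R_k - f_trial) / predicted_decrease

history = [10.0, 7.5, 8.2, 6.9, 7.4]            # recent objective values, f_k last
R = nonmonotone_reference(history)
rho = acceptance_ratio(R, f_trial=6.1, predicted_decrease=1.5)
# A trial step from the subproblem is accepted (and the radius possibly enlarged)
# when rho exceeds a threshold such as 0.25.
print(R, rho)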
On the Global Convergence of the PERRY-SHANNO Method for Nonconvex Unconstrained Optimization Problems
In this paper, we prove the global convergence of the Perry-Shanno memoryless quasi-Newton (PSMQN) method with a new inexact line search when applied to nonconvex unconstrained minimization problems. Preliminary numerical results show that the PSMQN method with the particular line search conditions is very promising.
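For background, a minimal sketch of the memoryless BFGS search direction that Perry-Shanno-type methods build on; the self-scaling factor and the paper's new inexact line search conditions are not reproduced here, and the helper function is purely illustrative.

# Illustrative only: the memoryless BFGS direction d = -H g, with H the BFGS
# update of the identity built from the last step s and gradient change y.
# Perry-Shanno-type methods add a self-scaling factor and pair the direction
# with inexact (Wolfe-type) line search conditions, neither shown here.
import numpy as np

def memoryless_bfgs_direction(g, s, y):
    rho = 1.0 / (y @ s)                              # requires the curvature condition y^T s > 0
    c = rho + rho**2 * (y @ y)
    return -g + rho * ((y @ g) * s + (s @ g) * y) - c * (s @ g) * s

# Example with small vectors: since y^T s > 0, the result is a descent direction (g^T d < 0).
g = np.array([1.0, -2.0, 0.5])                       # current gradient
s = np.array([0.3, 0.1, -0.2])                       # last step x_k - x_{k-1}
y = np.array([0.8, -0.4, 0.1])                       # gradient change g_k - g_{k-1}
d = memoryless_bfgs_direction(g, s, y)
print(d, g @ d)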
Journal
Journal title: Acta Mathematica Sinica, English Series
Year: 2004
ISSN: 1439-8516,1439-7617
DOI: 10.1007/s10114-004-0340-4